Synopsys and TSMC Deepen AI Design Alliance: What It Means

by Kalar Rajendiran on 05-05-2026 at 10:00 am

Synopsys Powering the next generation of AI

A recent announcement from Synopsys signals a meaningful escalation in the race to build next-generation AI hardware. The expanded collaboration between Synopsys and TSMC brings together silicon-proven IP, AI-driven design tools, and cutting-edge manufacturing processes in a tightly integrated effort to accelerate high-performance computing (HPC) and AI system development. More than a routine partnership update, the move reflects a broader industry transition toward ecosystem-level innovation, where success depends on how well design, IP, and fabrication technologies align from the outset.

What Was Announced

At the core of the announcement is a three-part expansion of capabilities spanning IP, design flows, and system-level enablement.

Synopsys is advancing silicon-proven interface IP validated on TSMC’s most advanced nodes, including 3nm and emerging 2nm-class processes. These include next-generation standards such as M-PHY v6.0, which has achieved an industry-first low-power silicon bring-up on N2P, alongside tapeouts of 64G UCIe IP and 224G high-speed interconnect IP. Together, these technologies form the backbone of AI chips that must move massive volumes of data with minimal latency and power overhead, particularly in bandwidth-constrained environments.

The companies are also extending certified electronic design automation (EDA) flows with a sharper emphasis on increasingly agentic AI-driven optimization. Collaboration on run assistance within Synopsys Fusion Compiler, leveraging TSMC’s A14 process and NanoFlex Pro architecture, is aimed at improving power, performance, and area (PPA) while boosting design productivity. This signals a shift from passive AI assistance toward more active, decision-guiding systems that can materially impact how chips are designed at advanced nodes.

Beyond individual dies, the partnership continues to push into advanced packaging and system-level integration. Synopsys’ 3DIC Compiler platform is now enabling productivity improvements for TSMC’s CoWoS technology at interposer sizes reaching up to 5.5 times the reticle limit, underscoring the scale of modern multi-die designs. This is complemented by multiphysics simulation capabilities that address thermal, electrical, and optical interactions. These requirements are becoming essential as chips evolve into tightly integrated systems.

The announcement also highlights expansion into new application domains. In automotive, Synopsys is offering a UCIe IP solution compliant with ASIL B functional safety requirements on TSMC’s N5A process, marking a significant step toward enabling chiplet-based architectures in safety-critical environments. Meanwhile, advancements in M-PHY IP are targeted at next-generation mobile and storage applications, including smartphones that demand both high performance and power efficiency.

Finally, the collaboration advances AI infrastructure through co-packaged optics. Multiphysics design enablement for co-packaged optical systems, including TSMC’s COUPE design flow, spans optical path simulation, electromagnetic extraction, and system-level analysis, and is paired with 224G IP designed to support optical Ethernet and emerging interconnect standards such as UALink. Together, these capabilities directly address the growing bandwidth and energy challenges facing large-scale AI systems.

Why This Matters for AI Hardware

The significance of this partnership lies in how it tackles the core constraints of modern AI workloads. As compute performance scales, the bottlenecks have shifted toward data movement, power efficiency, and system integration. By combining high-speed IP, agentic AI-driven design tools, and advanced packaging technologies, Synopsys and TSMC are reducing the gap between design complexity and manufacturable silicon.

The introduction of agentic run assistance in EDA tools marks a particularly important inflection point. Rather than simply accelerating existing workflows, these capabilities begin to reshape them, enabling engineers to delegate increasingly complex optimization tasks to AI systems. This has the potential to significantly compress development cycles while improving overall design quality.

Equally critical is the focus on bandwidth. Technologies such as 224G interconnects and co-packaged optics are emerging as key enablers for scaling AI infrastructure, where moving data efficiently is often more challenging than processing it. By integrating these capabilities into both IP and design flows, the partnership addresses one of the most pressing limitations in next-generation AI systems.

The expansion into automotive and mobile markets further underscores the breadth of this strategy. It signals that advanced-node, multi-die, and chiplet-based designs are no longer confined to hyperscale data centers but are beginning to permeate safety-critical and consumer applications as well.

Market and Industry Implications

The expanded alliance reinforces Synopsys’s position as a central player in AI silicon enablement while strengthening TSMC’s ecosystem around its most advanced process nodes. For chip designers, tighter integration between EDA tools and foundry technologies can translate into faster time-to-market and reduced development risk, particularly when targeting cutting-edge nodes.

At the same time, the partnership reflects a broader industry dynamic in which design tools and manufacturing processes are becoming increasingly interdependent. As flows become more deeply optimized and certified for specific nodes, the cost and complexity of switching ecosystems rise. This creates a form of strategic lock-in that benefits tightly aligned partners while raising barriers for competitors.

The Bigger Picture

Taken together, the announcement illustrates a shift in how semiconductor innovation is defined in the AI era. Progress is no longer driven solely by transistor scaling but by the ability to coordinate across multiple layers of the technology stack, from design software and reusable IP to packaging and system integration.

The Synopsys–TSMC collaboration points to a future where chips are conceived not as isolated components but as parts of larger, highly integrated systems spanning data centers, vehicles, and mobile devices. In this landscape, competitive advantage will increasingly depend on how effectively companies can bring together tools, technologies, and partners to deliver complete, optimized solutions.

As AI continues to push the limits of performance and complexity, partnerships like this are likely to define the pace of innovation. The companies that succeed will be those that can bridge the gap between design intent and real-world deployment, turning increasingly sophisticated ideas into scalable, manufacturable systems.

You can access the entire press announcement here.

Also Read:

How to Overcome the Advanced Node Physical Verification Bottleneck

Podcast EP342: The Evolution and Impact of Physical AI with Hezi Saar

WEBINAR: Beyond Moore’s Law and The Future of Semiconductor Manufacturing Intelligence


Dr. L.C. Lu on TSMC Advanced Technology Design Solutions

by Daniel Nenni on 05-01-2026 at 6:00 am

L.C. Lu, TSMC Senior Fellow and Vice President, Research and Development / Design & Technology Platform
Dr. L.C. Lu is Vice President of Research & Development / Design & Technology Platform at Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC) and a TSMC Senior Fellow.

L.C. leads efforts in design enablement, ensuring that the company can meet the diverse and evolving requirements of its global customer base. Prior to this, he headed the Design and Technology Platform organization starting in 2018.

Since joining TSMC in 2000, Dr. Lu has held multiple leadership positions in design services. He has worked closely with process R&D teams to pioneer Design and Technology Co-Optimization (DTCO), improving speed, power efficiency, and density in advanced process technologies. He has also collaborated extensively with ecosystem partners through the TSMC Open Innovation Platform (OIP), helping deliver comprehensive design solutions and intellectual property for a wide range of applications, including high-performance computing, automotive, RF, and advanced 2.5D and 3D designs.

Dr. Lu’s contributions have earned him significant recognition. He received Taiwan’s National Outstanding Manager Award in 2012 and was named a TSMC Senior Fellow in 2025. He is also one of the company’s most prolific inventors, holding more than 100 patents worldwide.

He earned his bachelor’s degree in electrical engineering from National Taiwan University, a master’s degree in computer science from National Tsing Hua University, and a Ph.D. in computer science from Yale University.

L.C.’s presentation focuses on advanced design-technology co-optimization (DTCO), packaging innovations, and AI-driven methodologies that enable continued scaling in performance, power, and area (PPA) for next-generation semiconductor systems. The discussion highlights how tightly coupled design and process innovations, along with system-level integration, are critical to sustaining Moore’s Law in the era of AI and HPC.

At the device and design level, TSMC emphasizes DTCO and design-driven cell library (DDCL) innovations to achieve node-to-node scaling from N5 through N2 and into A14. The introduction of NanoFlex and NanoFlex Pro architectures enables flexible standard cell design with significant gains in efficiency. N2 NanoFlex achieves up to 50% speed improvement at constant voltage or 50% power reduction at constant performance compared to traditional cells. Building on this, A14 NanoFlex Pro introduces a 1.5× cell height merged oxide diffusion (OD) architecture, significantly improving OD utilization and enabling tighter placement of high-speed and low-power cells. This results in 10–15% speed gains and ~20% area reduction relative to N2, effectively delivering multi-node scaling benefits within a single generation.


Further enhancements in N2P and N2U nodes incorporate advanced DTCO and power delivery optimizations. Hybrid dual-rail architectures reduce minimum operating voltage (Vmin) by over 200 mV compared to single-rail designs, achieving approximately 40% energy savings. N2U extends N2P with incremental improvements—3–4% higher performance or 8–10% lower power—while maintaining full compatibility with existing design rules and IP, ensuring smooth adoption for customers.
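The quoted ~40% savings is roughly consistent with the quadratic dependence of dynamic switching energy on supply voltage (E ≈ CV²). A minimal sketch of that relationship, assuming illustrative operating points of 0.9 V and 0.7 V (the absolute voltages are not given in the source, only the >200 mV delta):

```python
# Dynamic switching energy scales with the square of supply voltage: E ~ C * V^2.
# Illustrative check of the ~40% savings quoted for a >200 mV Vmin reduction.
# The 0.9 V and 0.7 V operating points are assumptions, not figures from TSMC.

def energy_ratio(v_new: float, v_old: float) -> float:
    """Ratio of dynamic energy after vs. before a supply-voltage reduction."""
    return (v_new / v_old) ** 2

savings = 1 - energy_ratio(0.7, 0.9)
print(f"Energy savings: {savings:.0%}")  # roughly 40%
```

The quadratic term is why even a modest 200 mV reduction in minimum operating voltage translates into such a large energy win.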

EDA readiness and AI integration are key enablers of these advanced nodes. TSMC collaborates closely with electronic design automation (EDA) partners to ensure tool readiness and to incorporate AI-enhanced workflows. Agentic AI systems are being deployed across design cycles to optimize block placement, routing, and performance, improving both productivity and design quality. These AI techniques are also applied to analog and RF design, enabling efficient migration across process nodes and accelerating time-to-market.

At the system level, TSMC’s advanced packaging technologies—particularly CoWoS, SoIC, and 3D Fabric—play a central role in enabling AI scaling. CoWoS technology continues to scale reticle size and integration capacity, allowing significant increases in compute density. From 2024 to 2029, the number of transistors in a single CoWoS system is projected to increase by 48×, driven by larger package sizes, increased system-on-chip (SoC) counts, and transition to advanced nodes such as TSMC A14.
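The 48× figure is the product of several multiplicative levers rather than any single improvement. A hedged decomposition, where the individual factors below are illustrative assumptions chosen only to show how per-lever gains compound (the source quotes just the combined number):

```python
# Back-of-the-envelope decomposition of the projected 48x growth in
# transistors per CoWoS system (2024 -> 2029). The individual factors
# below are illustrative assumptions; the source gives only the 48x total.

package_area_factor = 4   # larger interposers / reticle multiples (assumed)
soc_count_factor    = 3   # more SoC dies per package (assumed)
density_factor      = 4   # node migration toward A14 (assumed)

total = package_area_factor * soc_count_factor * density_factor
print(f"Combined transistor-count scaling: {total}x")
```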

Memory bandwidth scaling is similarly aggressive, with high-bandwidth memory (HBM) integration increasing both capacity and throughput. HBM stacks are expected to grow from 8 to 24, while I/O bandwidth per stack doubles and data rates increase significantly, resulting in an overall 34× bandwidth improvement. This scaling is supported by advancements in both DRAM technology and logic-based base dies fabricated on advanced nodes.
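Treating the quoted numbers as multiplicative factors, the per-pin data-rate scaling implied by the 34× aggregate figure can be backed out; the ~5.7× result below is an inference, not a number from the source:

```python
# Decomposition of the quoted ~34x aggregate HBM bandwidth growth.
# Stack count (8 -> 24) and per-stack I/O doubling come from the text;
# the implied per-pin data-rate factor is derived, not quoted.

stack_factor = 24 / 8    # 3x more HBM stacks per system
io_width_factor = 2      # I/O bandwidth per stack doubles
total_factor = 34        # overall improvement quoted in the text

implied_data_rate_factor = total_factor / (stack_factor * io_width_factor)
print(f"Implied per-pin data-rate scaling: ~{implied_data_rate_factor:.1f}x")
```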

Interconnect performance is improved through finer pitch scaling in both 2.5D and 3D integration. In CoWoS, micro-bump pitch reduction enhances bandwidth density and energy efficiency, while in SoIC, scaling to ~4.5 µm bump pitch delivers up to 4× bandwidth density and substantial energy savings. Additionally, silicon photonics integration via COUPE optical engines provides high-speed, low-latency interconnects, achieving 5–10× power efficiency improvements and 10–20× latency reduction compared to traditional electrical links.
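The ~4× bandwidth-density gain is consistent with vertical connection density scaling as the inverse square of bump pitch, which would place the prior-generation pitch near 9 µm. A sketch of that relationship (the 9 µm starting pitch is an inference for illustration, not a quoted figure):

```python
# Areal connection density scales as 1/pitch^2, so halving bump pitch
# quadruples the number of vertical connections per unit area.
# The 9 um prior-generation pitch is an assumption for illustration.

def density_gain(old_pitch_um: float, new_pitch_um: float) -> float:
    """Bandwidth-density improvement from a bump-pitch reduction."""
    return (old_pitch_um / new_pitch_um) ** 2

print(density_gain(9.0, 4.5))  # 4.0, matching the ~4x quoted for SoIC
```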

Power delivery and thermal management are identified as critical challenges in AI systems due to increasing compute density. TSMC addresses these through advanced capacitance solutions such as metal-insulator-metal (MIM) capacitors and embedded deep trench capacitors (eDTC), achieving over 10× improvements in capacitance density and reducing voltage droop significantly. Thermal optimization techniques—including improved packaging materials, hotspot spreading, and structural enhancements—reduce thermal resistance by up to 40%, ensuring reliable operation under high power conditions.

Bottom line: TSMC is advancing design methodologies through 3D IC design standardization and AI-driven automation. The introduction of “3D Blocks” as a modular design language aims to streamline 3D IC workflows and enhance collaboration across the ecosystem, with ongoing efforts toward IEEE standardization. Combined with generative AI and agent-based design optimization, these innovations promise substantial improvements in productivity and scalability for complex chip-package co-design.

Also Read:

Dr. Cliff Hou and the TSMC N2 Process Technology

TSMC Technology Symposium 2026 Overview


Dr. Y.J. Mii on TSMC Technology Leadership in 2026

by Daniel Nenni on 04-30-2026 at 8:00 am

Y.J. Mii, Executive Vice President and Co-Chief Operating Officer, TSMC
Dr. Y.J. Mii is Executive Vice President and Co-Chief Operating Officer at Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC).

Dr. Y.J. Mii joined TSMC in 1994 as a manager at Fab 3 before moving into the company’s research and development organization in 2001. He was appointed Vice President of R&D in 2011 and later advanced to Senior Vice President in November 2016.

Over more than 20 years at TSMC, Dr. Mii has played a central role in advancing and manufacturing cutting-edge CMOS technologies across both fab operations and R&D. He led the successful development of key process nodes, including 90nm, 40nm, and 28nm. In addition, he has driven innovation in more advanced technologies—such as 16nm, 7nm, 5nm, and 3nm—helping sustain TSMC’s leadership position in the global semiconductor foundry industry.

In recognition of his leadership in research and development, Dr. Mii received the IEEE Frederik Philips Award in 2022. Prior to joining TSMC, he worked as a research staff member at the IBM Research Center.

Dr. Mii holds 34 patents worldwide, including 25 granted in the United States. He earned his bachelor’s degree in electrical engineering from National Taiwan University, and both his master’s and Ph.D. in electrical engineering from the University of California, Los Angeles.

Dr. Y.J. Mii’s presentation outlines the company’s continued leadership in semiconductor technology and its roadmap for future innovation across advanced logic, system integration, and specialty platforms. The talk emphasizes TSMC’s commitment to delivering cutting-edge technologies that support next-generation applications such as AI, high-performance computing (HPC), and mobile devices.

TSMC is introducing several new advanced nodes, including A14, A13, and A12, which extend its leadership into what is described as the “Angstrom era.” The A14 node represents a second-generation nanosheet transistor technology and incorporates NanoFlex Pro, achieving significant improvements in performance, power, and area (PPA). Compared to the 2nm (N2) node, A14 delivers 10–15% speed improvement or 25–30% power reduction, along with notable density gains. Production is expected by 2028. Building on this, A13 offers further optimization, including a 6% die size reduction through optical shrink and improved efficiency, while maintaining backward compatibility with A14 designs.

TSMC’s 2nm family is also expanding, including N2, N2P, N2X, and N2U. These technologies are already seeing strong customer adoption, particularly driven by AI and HPC demands. N2 entered production recently, with N2P and A16 progressing toward volume production. The N2U variant further enhances performance and efficiency while maintaining compatibility with N2P, offering incremental speed and power improvements. The rapid increase in customer tapeouts highlights the strong industry demand for these advanced nodes.

Beyond nanosheet transistors, TSMC is investing in future innovations such as complementary field-effect transistors (CFET), which stack nFET and pFET vertically to enable continued scaling. The company has already demonstrated early CFET implementations and advanced SRAM designs with reduced footprint. Additionally, research into two-dimensional materials shows significant improvements in transistor performance, suggesting further opportunities for scaling and energy efficiency.

Interconnect technology is another key focus area. TSMC is improving copper-based interconnects by reducing resistance and capacitance through new materials and structures. It is also exploring alternative materials and air-gap techniques to further enhance performance. Long-term research includes novel 2D conductors that could dramatically reduce contact resistance compared to existing solutions.

In system integration, TSMC is advancing its HPC platform through technologies such as CoWoS, SoIC, and SoW. CoWoS remains a central platform for scaling, with increasing reticle sizes and high-bandwidth memory (HBM) integration planned through 2030. SoW technology aims to integrate entire systems on a wafer, enabling massive computing capabilities for AI workloads. Meanwhile, SoIC 3D stacking continues to evolve, improving interconnect density and power efficiency.

The company is also developing photonic integration technologies like the Compact Universal Photonic Engine (COUPE), which enables high-speed, low-power optical data transmission. These solutions significantly outperform traditional copper interconnects in both power efficiency and latency, and future advancements aim to further increase bandwidth and scalability.

In the specialty technology segment, TSMC highlights advancements in automotive, RF, memory, and display technologies. The N3A node is now fully automotive-qualified, while future nodes like N2A are in development. RF technologies such as N4C RF deliver improved power efficiency and performance for edge AI applications. In memory, embedded flash is being replaced by alternatives like resistive RAM (RRAM) and MRAM, which offer better scalability and performance. Display innovations, including high-voltage platforms, enable more efficient and compact designs for smartphones and smart glasses.

Bottom line: TSMC’s roadmap demonstrates a comprehensive approach to semiconductor innovation, spanning advanced nodes, new transistor architectures, system integration, and specialized technologies. The company aims to empower customers with industry-leading solutions that drive future computing advancements and enable emerging applications across multiple industries.

Also Read: 

Enabling Next-Generation AI Through Advanced Packaging and 3D Fabric Integration

Dr. Cliff Hou and the TSMC N2 Process Technology

The Shift to System-Level AI Drives Next-Generation Silicon

TSMC Technology Symposium 2026 Overview


Enabling Next-Generation AI Through Advanced Packaging and 3D Fabric Integration

by Kalar Rajendiran on 04-29-2026 at 10:00 am

CoWoS Enables AI Compute Scaling

The rapid rise of artificial intelligence is fundamentally reshaping computing architectures. As AI models scale toward trillions of parameters, traditional approaches to performance improvement are no longer sufficient. Instead, the industry is entering a new era where system-level innovation, advanced packaging, and 3D integration are becoming the primary drivers of progress. This shift reflects a broader transition in computing, where performance gains increasingly depend on how well entire systems are designed and integrated, rather than how small individual transistors can become.

The End of One-Dimensional Scaling

AI compute demand is growing at an exponential rate, creating a widening gap between required performance and what conventional silicon scaling can deliver. Bridging this gap requires innovation beyond the chip itself. The most important shift is that AI performance is now determined at the system level rather than purely at the silicon level. Future gains will depend on how effectively compute, memory, interconnect, and power systems are integrated into a cohesive whole. This marks a transition from device-centric optimization to full-stack co-design, extending from transistor technology all the way to data center architecture.

Data Movement Is the New Bottleneck

A critical constraint in modern AI systems is no longer computation, but data movement. Transporting data across chips can consume up to 50 times more energy than moving data within a single chip. At the same time, data transfer can account for the majority of system activity, significantly reducing accelerator utilization due to communication delays. This shift makes interconnect efficiency a central design priority. Improving bandwidth, reducing latency, and minimizing energy per bit are now essential to unlocking overall system performance.
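To make the 50× penalty concrete, the sketch below compares the energy to move one gigabyte on-chip versus across chips. Only the 50× ratio comes from the text; the 0.1 pJ/bit on-chip cost is an assumed order of magnitude for illustration:

```python
# Illustration of why cross-chip data movement dominates the energy budget.
# The 50x ratio is from the text; the 0.1 pJ/bit on-chip figure is an
# assumed order of magnitude, not a number from the source.

ON_CHIP_PJ_PER_BIT = 0.1
OFF_CHIP_PJ_PER_BIT = 50 * ON_CHIP_PJ_PER_BIT  # 50x penalty quoted above

def transfer_energy_uj(gigabytes: float, pj_per_bit: float) -> float:
    """Energy in microjoules to move a payload at a given pJ/bit cost."""
    bits = gigabytes * 8e9
    return bits * pj_per_bit * 1e-6  # pJ -> uJ

payload_gb = 1.0
print(f"on-chip : {transfer_energy_uj(payload_gb, ON_CHIP_PJ_PER_BIT):,.0f} uJ")
print(f"off-chip: {transfer_energy_uj(payload_gb, OFF_CHIP_PJ_PER_BIT):,.0f} uJ")
```

Whatever the absolute figures, the fixed 50× multiplier means every byte kept on-die or moved over a shorter, denser link is a direct win for accelerator utilization.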

The Memory Wall Is Getting Worse

As AI models continue to scale, memory demands are increasing even faster than compute capabilities. Emerging workloads, such as long-context processing and multimodal AI, are driving exponential growth in both memory capacity and bandwidth requirements. Systems are transitioning from gigabyte-scale memory to terabyte-scale configurations, while also demanding lower latency. However, memory technology is not advancing at the same pace as compute, creating a widening imbalance. Overcoming this “memory wall” is therefore essential for sustaining AI progress, and it is driving rapid innovation in high-bandwidth memory and memory integration strategies.

Power and Thermal Constraints Are Critical

The increase in compute density, particularly with the adoption of 3D stacking technologies, has led to a corresponding rise in power density and heat generation. These factors are quickly becoming limiting constraints for AI system scaling. Without significant advancements in power delivery, energy efficiency, and thermal management, performance gains cannot be sustained. As a result, power and cooling are no longer secondary considerations but have become central to system design and overall performance.

3D Fabric Technologies: The New Foundation

To address these challenges, advanced 3D fabric technologies are emerging as the foundation of next-generation AI systems. These technologies enable the integration of multiple chips and components into highly efficient, high-performance systems. Innovations such as 3D chip stacking allow for dramatically higher interconnect density, reducing both data movement distance and energy consumption. Advanced packaging platforms make it possible to combine logic and memory in close proximity, enabling massive bandwidth and capacity scaling. At the same time, high-bandwidth memory continues to evolve, delivering higher throughput and improved energy efficiency. Together, these advancements position packaging not merely as a supporting technology, but as a primary driver of system performance.

Co-Packaged Optics: Rethinking Interconnects

As electrical interconnects approach their physical limits, co-packaged optics is emerging as a promising solution for high-speed data transfer. By integrating photonics directly with compute hardware, this approach enables significant improvements in both power efficiency and latency. It also provides a scalable path forward for data center networking, where the need for higher bandwidth and lower energy consumption continues to grow. This evolution signals a broader shift toward optical technologies as a key enabler of future AI infrastructure.

System-on-Wafer and Wafer-Scale Integration

Looking further ahead, system integration is advancing toward wafer-scale architectures, where entire systems are built on a single substrate. This approach enables unprecedented levels of integration density while reducing the overhead associated with traditional interconnects. By minimizing communication distances and improving efficiency, wafer-scale integration offers a powerful pathway for scaling AI performance beyond the limits of conventional packaging methods.

The Rise of System Technology Co-Optimization (STCO)

As AI systems grow more complex, optimizing individual components in isolation is no longer sufficient. The industry is increasingly adopting System Technology Co-Optimization, an approach that simultaneously considers chip design, packaging, interconnects, power delivery, and thermal behavior. This holistic methodology ensures that all parts of the system are designed to work together efficiently, enabling better overall performance and energy efficiency. It represents a fundamental shift in how hardware systems are conceived and developed.

Summary

The future of AI hardware will not be defined by silicon scaling alone. Instead, it will be shaped by advances in packaging, interconnects, memory systems, and power efficiency, all brought together through system-level design. In this new paradigm, the system itself becomes the primary unit of innovation. Success will depend on the ability to integrate across multiple domains and optimize them collectively. As this transformation continues, it is clear that the “system” has effectively become the new chip, redefining how performance is achieved in the age of AI.

Also Read:

Dr. Cliff Hou and the TSMC N2 Process Technology

The Shift to System-Level AI Drives Next-Generation Silicon

All in One Bluetooth Audio: A Complete Solution on a TSMC 12nm Single Die

TSMC Technology Symposium 2026 Overview


Dr. Cliff Hou and the TSMC N2 Process Technology

by Daniel Nenni on 04-28-2026 at 8:00 am

Cliff Hou, Senior Vice President and Deputy Co-COO, TSMC
Dr. Cliff Hou is Senior Vice President, Deputy Co-COO, and Chief Information Security Officer at TSMC, where he also serves as deputy to Y.P. Chyn. Over a long career with the company since joining in 1997, he has played a pivotal role in advancing TSMC’s design technology and ecosystem strategy.

Before assuming his current position, Dr. Hou held several key leadership roles. He served as Vice President of Design and Technology Platform from 2011 to 2018, and later as Vice President of Technology Development starting in August 2018. Earlier in his career, from 1997 to 2007, he established TSMC’s technology design kit and reference flow development organizations, laying the foundation for its design enablement infrastructure.

Over the past decade, Dr. Hou has been instrumental in building TSMC Open Innovation Platform (OIP), which has grown into one of the most comprehensive design ecosystems in the global semiconductor industry. His work in reference flows and design-for-manufacturing (DFM) has significantly lowered barriers to IC design and improved accessibility for customers.

In recognition of his contributions, Dr. Hou received the National Manager Excellence Award in 2010. He also led TSMC’s OIP project team to win the National Industry Innovation Award in 2011, presented by the Ministry of Economic Affairs in Taiwan.

Prior to joining TSMC, Dr. Hou worked at the Industrial Technology Research Institute (ITRI/CCL) as a section manager focused on design environments. He also served as an associate professor at I-Shou University (formerly Kaohsiung Polytechnic Institute).

Dr. Hou holds 44 U.S. patents and serves on the board of directors of Global Unichip Corp. He earned his bachelor’s degree in control engineering from National Chiao Tung University and a Ph.D. in electrical and computer engineering from Syracuse University.

Cliff’s presentation outlined the significant progress and achievements made by TSMC over the past year in semiconductor manufacturing, focusing on technology advancement, capacity expansion, advanced packaging, global footprint, and sustainability initiatives.

In 2025, TSMC made strong strides in both cutting-edge technology and production capacity. The company’s most advanced node, TSMC N2, has already entered volume production. Despite its increased complexity compared to previous generations, TSMC has achieved an improved yield learning curve, demonstrating its manufacturing excellence. The next iteration, featuring backside power delivery, remains on schedule.

TSMC has also made advancements in automotive technology, with its N3A node now production-ready and capable of meeting stringent quality requirements. Across all advanced nodes, including 3nm, 5nm, and 7nm, the company continues to refine performance and reliability to support a wide range of applications. Additionally, TSMC is aggressively expanding its advanced packaging technologies to meet growing demand for HPC and AI applications.

A major highlight is the rapid expansion of 2nm production capacity. TSMC is ramping up five phases of 2nm fabs within a single year—an unprecedented pace. As a result, first-year output for 2nm is projected to be 45% higher than that of the previous 3nm generation. Looking ahead, the company plans to further increase 2nm capacity by approximately 70% between 2026 and 2028. Meanwhile, combined capacity for 3nm and 5nm technologies is expected to grow steadily by about 25% over several years.

To address the time constraints associated with building new fabs, TSMC is leveraging artificial intelligence and digital transformation to optimize existing facilities. AI-driven systems improve scheduling, equipment efficiency, and process optimization, enabling higher throughput and reduced production cycle times. Generative AI is also used to fine-tune process parameters, while data analytics helps minimize downtime and maximize tool utilization. These innovations allow TSMC to extract greater productivity from existing capacity while new fabs are under construction.

Demand for AI and HPC applications is a key driver of growth. From 2022 to 2026, the number of wafers shipped for AI accelerators is expected to increase elevenfold. Notably, large-die chips (over 500 mm²) are also seeing strong growth, with shipments increasing sixfold. TSMC’s accumulated experience across multiple generations has enabled consistent improvements in yield and defect density, even for these complex designs.
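The quoted multiples translate into steep compound annual growth rates over the four-year span. A quick calculation (the per-year breakdown is not given in the source):

```python
# Compound annual growth rate (CAGR) implied by the quoted 2022-2026
# shipment multiples: 11x for AI-accelerator wafers, 6x for large dies.
# Only the endpoint multiples come from the text.

def cagr(total_growth: float, years: int) -> float:
    """Constant annual growth rate that compounds to total_growth over years."""
    return total_growth ** (1 / years) - 1

print(f"AI accelerator wafers CAGR: {cagr(11, 4):.0%}")  # roughly 82%
print(f"Large-die (>500 mm^2) CAGR: {cagr(6, 4):.0%}")   # roughly 57%
```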

Beyond leading-edge technologies, TSMC continues to invest in mature nodes, including specialty processes such as radio frequency, high-voltage, analog, embedded memory, and image sensors. The company aims to remain the leading provider in this segment while expanding capacity in a measured and strategic manner.

In advanced packaging, TSMC is pushing the boundaries of 3D integration technologies, such as CoWoS and SoIC. These technologies are critical for enabling chiplet-based architectures and high-bandwidth memory integration. The company has reduced the time required to transition from development to high-volume manufacturing—by 30% for CoWoS and 75% for SoIC—helping customers bring products to market faster. Collaboration with ecosystem partners, including material suppliers and testing providers, has further improved yield and manufacturing efficiency. Packaging capacity is also expanding aggressively, with significant growth projected through 2027.

TSMC’s global expansion strategy is another key focus. The company is doubling its pace of fab construction, with nine new or converted phases planned annually in 2025 and 2026—twice the historical average. This expansion extends beyond Taiwan to include major investments in the United States, Japan, and Germany.

In Arizona, TSMC’s first fab is already in production, with additional phases under construction targeting advanced nodes such as 3nm and 2nm. The company is also planning advanced packaging facilities and acquiring additional land to support long-term growth. In Japan, the Kumamoto fab has entered production and is expanding capacity, while a second fab is being developed with a revised focus on 3nm technology. In Germany, a new fab in Dresden is under construction, targeting automotive and industrial applications. Across these regions, TSMC has demonstrated the ability to replicate high yields comparable to its Taiwan operations.

Sustainability and green manufacturing are central to TSMC’s long-term vision. The company aims to achieve net-zero carbon emissions by 2050 and has already reduced emissions by 3.8 million tons in 2025 alone. Resource recycling is another priority, with goals of 70% internal recycling and up to 98% total recycling by 2030. Water stewardship initiatives target 100% water positivity by the 2040s, with significant progress already made through reclaimed water usage and conservation efforts.

Bottom line: TSMC is aggressively advancing semiconductor technology while scaling capacity to meet surging demand, particularly in AI and HPC. Through innovation in manufacturing, packaging, and AI-driven optimization, combined with global expansion and sustainability commitments, the company is positioning itself to remain a leader in the semiconductor industry for years to come.

Also Read:

TSMC Technology Symposium 2026 Overview

TSMC to Elon Musk: There are no Shortcuts in Building Fabs!

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation


The Shift to System-Level AI Drives Next-Generation Silicon
by Kalar Rajendiran on 04-27-2026 at 8:00 am

TSMC Advanced Technology Roadmap

At its 2026 Technology Symposium, TSMC delivered a clear message: the AI era has entered a new phase. The primary constraint is no longer model capability, but the systems required to run those models at scale. Addressing this shift will demand significant advances in semiconductor technology, spanning compute, memory, interconnects, and power efficiency.

From Model Scaling to System Scaling

Over the past several years, AI progress was largely driven by scaling models: expanding parameter counts, improving training methods, and unlocking new reasoning capabilities. That paradigm is now evolving. In 2026, the bottleneck has shifted to system-level challenges such as compute throughput, memory bandwidth, interconnect efficiency, power delivery, and deployment scale. AI is becoming fundamentally a systems problem rather than a purely algorithmic one.

This transition is especially visible in the rise of enterprise AI agents. These systems are moving beyond narrow task assistance to orchestrating workflows, integrating enterprise data, and enabling more autonomous decision-making. As a result, they require high reliability, strong security, and sustained performance, all of which significantly increase infrastructure demands.

Explosive Growth in AI Compute Demand

AI compute demand continues to grow at an extraordinary pace, driven by both training and inference. On the training side, large language models have already driven roughly fivefold annual increases in compute requirements, and the shift toward multimodal AI, which combines text, vision, audio, and real-world signals, is accelerating this trend further. Training demand alone is expected to increase by another order of magnitude.

Even more striking is the growth in inference. Token generation has increased more than 500 times between 2022 and 2025, and new techniques such as chain-of-thought reasoning are significantly increasing compute per query. The emergence of agent-based AI systems could multiply this demand again, while large-scale multimodal deployments may push total inference workloads toward million-fold growth. As a result, inference is rapidly becoming the dominant driver of compute infrastructure expansion.

AI Is Expanding Beyond the Cloud

AI is no longer confined to centralized cloud environments; it is rapidly expanding into edge and physical domains. At the edge, inference is increasingly being performed directly on devices such as PCs, smartphones, and wearables. This shift enables lower latency, improved privacy, and real-time responsiveness, and is driving the widespread adoption of dedicated AI accelerators like NPUs in consumer hardware.

At the same time, physical AI is bringing intelligence into the real world through robotics and embodied systems. These applications require tight integration of AI with sensing, actuation, and real-time control, all within strict power and reliability constraints. Together, these trends highlight the growing need for silicon solutions that can balance performance, efficiency, and compact form factors across a wide range of environments.

Data Center Scaling Enters Hyper-Growth

The rapid expansion of AI workloads is fundamentally reshaping data center infrastructure. Annual capacity additions, which previously grew at a steady rate of around 5 to 6 gigawatts, are now expected to reach 30 to 40 gigawatts per year. At the same time, overall data center investment growth has accelerated from roughly 10 percent annually before the rise of generative AI to more than 30 percent per year through the end of the decade.

This growth is not just about adding capacity; it is about delivering efficient, reliable, and scalable systems. Energy efficiency and total cost of ownership are becoming central concerns, making semiconductor-level improvements critical to the sustainability of AI infrastructure.

TSMC’s Technology Roadmap: Key Innovations

A14: Next-Generation Logic Platform (2028)

A14 represents TSMC’s next major step in logic technology, combining second-generation nanosheet transistors with NanoFlex Pro architecture and continued backend scaling innovations. Compared with the N2 node, A14 is expected to deliver a 10 to 15 percent speed improvement at the same power or a 25 to 30 percent power reduction at the same speed, along with approximately 1.2 times the logic density.

A central innovation in A14 is NanoFlex Pro, which enhances standard cell architecture to improve area efficiency and performance per watt. This is complemented by significant backend scaling advancements, including tighter metal pitch and reduced minimum metal area, enabling higher transistor density and improved overall efficiency. Together, these innovations demonstrate that progress at advanced nodes now depends on full-stack optimization rather than transistor scaling alone.

A13 and A12: Extending the Platform

Building on A14, TSMC is extending its roadmap with A13 and A12 technologies, both targeted for production around 2029. A13 further improves density and efficiency while maintaining backward compatibility with A14, enabling smoother design migration for customers. A12 introduces backside power delivery, a major innovation that improves power integrity and performance by separating power and signal routing. These developments reflect a broader shift toward holistic scaling, where power delivery and system-level considerations play an increasingly important role.

N2 Family: Nanosheet Era in Production

The N2 node marks TSMC’s transition from FinFET to nanosheet transistor architecture, delivering improved electrostatic control, reduced leakage, and lower operating voltage. These benefits translate into tangible efficiency gains in real-world applications.

The N2 family includes several variants designed to address different performance needs. The base N2 node entered production in 2025, followed by N2P in 2026 as an enhanced version. N2X, expected in 2027, targets high-performance applications with additional frequency gains, while N2U, planned for 2028, integrates NanoFlex Pro enhancements to further improve performance and power efficiency. This expanding family underscores the importance of offering flexible solutions tailored to diverse workloads.

Advanced Packaging and 3D Integration

As AI workloads continue to scale, advanced packaging technologies are becoming as critical as process nodes themselves. TSMC is advancing its chiplet and 3D integration capabilities with improvements such as second-generation CoWoS technology, which reduces interconnect resistance and enables higher bandwidth through finer I/O pitch.

These innovations allow for denser integration of compute and memory, improving performance and energy efficiency at the system level. In the AI era, packaging is no longer a secondary consideration but a key enabler of overall system performance.

N3: Today’s Workhorse Node

While future nodes attract significant attention, the N3 family remains the backbone of current high-performance computing. It is widely deployed across mobile devices, CPUs, AI accelerators, and networking applications, with multiple variants such as N3P and N3C supporting different use cases. Strong customer adoption and a robust pipeline of new designs highlight the continued importance of mature leading-edge nodes in delivering value across the ecosystem.

Summary

TSMC’s roadmap reflects a fundamental shift in the semiconductor industry. As AI continues to scale, the primary challenge is no longer developing more powerful models, but building the infrastructure required to support them efficiently. This requires innovation across the entire technology stack, from transistors and interconnects to packaging and system architecture.

In this new era, success will depend on the ability to deliver not just better chips, but better systems. The companies that can integrate performance, efficiency, and scalability at every level of the stack will define the future of AI—and increasingly, that future is being shaped at the silicon level.

Also Read:

TSMC Technology Symposium 2026 Overview

TSMC to Elon Musk: There are no Shortcuts in Building Fabs!

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation


All in One Bluetooth Audio: A Complete Solution on a TSMC 12nm Single Die
by Daniel Nenni on 04-27-2026 at 6:00 am

All in One Bluetooth Audio A Complete Solution on a TSMC 12nm Single Die

The rapid evolution of wireless audio has placed unprecedented demands on system integration, power efficiency, and performance. Against this backdrop, the webinar “All-in-One Bluetooth Audio: A Complete Solution on a TSMC 12nm Single Die” offers a timely and technically rich exploration of how modern semiconductor design is meeting these challenges. For engineers, architects, and product leaders working in wireless audio, connectivity, or system-on-chip (SoC) design, this session provides both practical insights and a forward-looking perspective on integration trends shaping the industry.

REGISTER HERE

At the heart of the webinar is a detailed examination of a fully integrated Bluetooth audio solution implemented on a single die using advanced 12nm process technology from TSMC. Moving to a single-die architecture represents a significant shift from traditional multi-chip or module-based designs. By consolidating RF front-end, baseband processing, digital signal processing (DSP), memory, and power management into one silicon platform, designers can achieve tighter coupling between subsystems, reduced latency, and improved energy efficiency. This level of integration is particularly critical for applications such as true wireless earbuds, smart headsets, and embedded audio systems, where size, battery life, and performance must be optimized simultaneously.

One of the key reasons to attend this webinar is the opportunity to understand the architectural trade-offs involved in such high levels of integration. Designing on a 12nm node introduces both opportunities and constraints. While the process enables higher transistor density and lower power consumption, it also requires careful attention to analog/RF performance, noise isolation, and thermal considerations. The session is expected to walk through these challenges, offering insights into how designers balance digital scaling benefits with the sensitivities of RF and mixed-signal blocks.

Another compelling aspect of the webinar is its focus on system-level optimization. Bluetooth audio is no longer just about connectivity; it is about delivering high-quality, low-latency audio experiences under strict power budgets. Attendees will gain visibility into how DSP pipelines are structured for efficient audio processing, how coexistence mechanisms are implemented to handle interference, and how power management strategies are designed to extend battery life without compromising performance. These are not abstract concepts but practical considerations that directly impact product success in competitive consumer markets.

The webinar also promises to cover silicon validation and real-world performance metrics. This is particularly valuable because it bridges the gap between theoretical design and deployed systems. Understanding how a single-die solution performs in terms of power consumption, latency, RF robustness, and audio fidelity provides attendees with a benchmark for their own designs. It also offers a clearer picture of what is achievable with current process technology and integration techniques.

Beyond the technical depth, the webinar is relevant because it reflects a broader industry trend toward consolidation and platformization. As wireless audio devices become more ubiquitous, the ability to deliver complete, scalable solutions on a single chip is becoming a competitive differentiator. Engineers who understand these trends will be better positioned to design future-proof systems and make informed decisions about architecture, process nodes, and integration strategies.

Finally, attending this webinar is an efficient way to stay current in a fast-moving field. Instead of piecing together information from disparate sources, participants can gain a cohesive understanding of end-to-end Bluetooth audio system design in a single session. Whether you are an RF engineer looking to understand digital integration impacts, a DSP developer interested in system constraints, or a product engineer evaluating design trade-offs, the content is directly applicable to real-world challenges.

REGISTER HERE

Bottom line: This webinar is more than a product overview; it is a deep technical dive into the future of integrated wireless audio systems. By attending, you gain not only knowledge of a specific implementation but also a framework for thinking about integration, efficiency, and performance in next-generation designs.

Also Read:

From Satellites to 5G: Ceva’s PentaG-NTN™ Lowers Barriers for Terminal Innovators

Ceva IP: Powering the Era of Physical AI

Ceva Wi-Fi 6 and Bluetooth IPs Power Renesas’ First Combo MCUs for IoT and Connected Home


Carbon in the Age of AI Chips: What the Semiconductor Industry Needs to Know This Earth Day
by Admin on 04-23-2026 at 6:00 am

Carbon in the Age of AI Chips

Stephen Russell: Senior Technical Fellow, TechInsights

Every April, Earth Day prompts a flurry of corporate sustainability pledges and green-tinted press releases. But for the semiconductor industry in 2026, the conversation has moved well past pledges. Carbon accountability is now a procurement requirement, a regulatory expectation, and increasingly a design constraint. This Earth Day, TechInsights is releasing a new sustainability report, Carbon in the Age of AI Chips. Authored by TechInsights Senior Technical Fellow Stephen Russell and Senior Sustainability Analyst Lara Chamness, the report examines where semiconductor emissions are actually coming from, why AI is accelerating the problem faster than most reporting methods can track, and what engineering, procurement, and sustainability teams can do about it right now.

Here’s a preview of what’s inside:

The Scale of the Problem Is Getting Harder to Ignore

Start with the headline numbers. Fabrication emissions are projected to reach 186 million metric tons of CO₂e in 2026, a record high, rising to approximately 247 million metric tons by 2030. Leading-edge technologies below 4nm will account for 26% of total emissions this year, climbing to 42% by 2030. Those are not abstract figures. They represent real consequences of real decisions: which fab to use, which memory configuration to specify, which supplier to source from.

What makes 2026 feel genuinely different from prior years is the convergence of three forces pushing carbon upstream into product decisions. Advanced manufacturing keeps getting more energy- and resource-intensive, especially at leading-edge logic and high-layer 3D NAND. AI demand is driving unprecedented silicon and memory intensity per system, not just more units but fundamentally heavier systems. And procurement teams are being asked, with increasing urgency, to defend supplier choices with traceable carbon logic rather than slide-deck narratives.

Manufacturing Carbon Is a Strategic Variable, Not a Fixed Cost

One of the report’s central arguments is that manufacturing carbon should not be treated as a black box or a rounding error. It is a strategic variable, and it responds to specific decisions.

The report’s Sustainability Matrix maps carbon hotspots across device types and toolsets. For advanced logic nodes, Scope 2 emissions driven by electricity are concentrated in lithography. For 3D NAND, dry etch can account for nearly half of total manufacturing emissions, driven by high-power plasma processes and high global warming potential gases. Some of those gases carry a 100-year GWP of around 25,000 times that of CO₂.

Perhaps the most striking case study involves backside power delivery (BSPD), a major scaling innovation that many assume carries a straightforward carbon penalty due to added process complexity. The reality is more nuanced. In an illustrative comparison of Intel 18A manufactured in the United States versus TSMC N2 manufactured in Taiwan, the Intel process results in lower manufacturing CO₂e per die. Not because it is simpler, but because the U.S. grid is cleaner. The electricity mix where a chip is fabricated can outweigh the complexity of the manufacturing process itself. That is a finding with immediate implications for anyone making sourcing or fab-selection decisions.
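The grid-mix point can be made concrete with a minimal back-of-envelope sketch. All numbers below are hypothetical placeholders, not figures from the report; the function simply shows how per-die CO₂e combines electricity-driven (Scope 2) emissions, process-gas emissions, and yield, and why a cleaner grid can outweigh a more complex flow.

```python
# Illustrative model (all inputs hypothetical, not from the report):
# CO2e per good die = (wafer energy * grid intensity + process-gas CO2e)
#                     / (gross dies per wafer * yield)

def co2e_per_die(kwh_per_wafer, grid_kg_per_kwh, process_kg_per_wafer,
                 gross_dies, yield_frac):
    """Scope 2 (electricity) plus process-gas CO2e, amortized over good dies."""
    scope2 = kwh_per_wafer * grid_kg_per_kwh
    total = scope2 + process_kg_per_wafer
    return total / (gross_dies * yield_frac)

# Hypothetical comparison: a more complex flow on a cleaner grid versus a
# simpler flow on a carbon-heavier grid.
complex_clean_grid = co2e_per_die(12000, 0.38, 900, 300, 0.80)
simpler_dirty_grid = co2e_per_die(10000, 0.55, 800, 300, 0.80)
# With these placeholder inputs, the more complex flow still comes out
# lower per die, because electricity dominates the total.
```

The takeaway mirrors the report's finding: when Scope 2 dominates, the grid's carbon intensity is a bigger lever than process complexity.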

AI Hardware Is Scaling Emissions Faster Than Shipments

The report’s treatment of AI accelerators is where the numbers become genuinely striking. TechInsights’ Global AI GPU Manufacturing Carbon Emissions Forecast shows that by 2030, manufacturing emissions from AI GPU production are projected to rise more than twelvefold, from approximately 1.8 million metric tons CO₂e in 2024 to 21.6 million metric tons CO₂e. AI GPU manufacturing is expected to account for roughly 8.7% of total semiconductor die fabrication emissions by 2030. The average accelerator is expected to exceed one metric ton of CO₂e per unit by 2029.

The driver is not primarily bigger logic dies. It is memory, specifically high-bandwidth memory (HBM). The average AI accelerator is expected to integrate roughly 250 HBM dies by 2030. NVIDIA’s Rubin Ultra-class designs are projected to approach approximately 1 TB of HBM through higher stack counts and heights. As Stephen Russell notes, the AI-driven surge in HBM and advanced memory is likely to raise semiconductor manufacturing emissions in absolute terms, increasing memory wafer starts and adding process complexity even as leading manufacturers improve efficiency per transistor.

There is a subtler dimension the report explores carefully: yield. Stacking dies compounds yield loss, and in tall-stack HBM scenarios, stacking yields above 93% are necessary to prevent emissions per usable stack from rising sharply. That makes yield learning and process control first-order sustainability levers, not just cost levers.
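A simple model shows why stacking yield compounds so sharply. This is my own simplification, not the report's methodology: assume each of the (n − 1) bonding steps in an n-high stack succeeds independently with probability y, so stack yield is y^(n−1), and the embodied carbon of scrapped stacks is amortized over the good ones.

```python
# Illustrative model (a simplification, not the report's methodology):
# each bonding step can scrap the whole stack, so yield compounds
# geometrically with stack height.

def co2e_per_good_stack(dies_per_stack, co2e_per_die, per_step_yield):
    """Embodied CO2e charged to each usable stack, including scrap."""
    stack_yield = per_step_yield ** (dies_per_stack - 1)
    embodied = dies_per_stack * co2e_per_die
    return embodied / stack_yield

# For a 16-high stack (15 bonding steps), dropping per-step yield from
# 99% to 93% roughly 2.5x-es the emissions per usable stack.
good_yield = co2e_per_good_stack(16, 1.0, 0.99)
poor_yield = co2e_per_good_stack(16, 1.0, 0.93)
```

The geometric compounding is why a few points of per-step yield translate into such large swings in carbon per shipped part.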

The Client Device Story: Carbon Paid Upfront

The report’s third major focus is consumer and enterprise devices, where on-device AI is often framed as an operational efficiency win. Fewer cloud calls, lower network load, specialized local hardware: all genuine benefits. But the manufacturing emissions for those devices are paid upfront, and they are concentrated in places that might surprise you.

Using teardown-based analysis of AI PCs including recent Microsoft Surface and ASUS Zenbook models, the report finds a consistent pattern: memory and storage, not the headline processor or NPU, account for the majority of embodied carbon in AI PC platforms. Across the examples evaluated, memory accounts for roughly 43% to 57% of packaged-IC CO₂e, while the applications processor accounts for only about 14% to 21%.

The supplier concentration finding is particularly actionable. In one Surface Laptop 7 configuration, three suppliers account for approximately 73% of packaged-IC carbon. In a Zenbook S14 model, three suppliers account for roughly 84%. A small number of parts and vendors determine most of the footprint, which means platform configuration and supplier selection are among the highest-leverage carbon choices a product team can make.

What Can Actually Be Done

The report identifies a clear set of high-leverage actions: pursuing cleaner electricity and power purchase agreements, reducing yield loss and rework especially late in the flow and in stacked memory, substituting low-GWP gases with higher abatement efficiency, improving tool energy and utilization, and optimizing bit density and platform configuration.

Semiconductor sustainability has become a decision problem. The highest-impact choices around fab location, memory configuration, supplier mix, and platform architecture are made before a product ships, carrying carbon consequences that most legacy reporting methods cannot capture. The full report, Carbon in the Age of AI Chips, is available now.

LINK: Carbon in the Age of AI Chips | Earth Day eBook | TechInsights

Stephen Russell: Senior Technical Fellow

As Senior Technical Fellow for Sustainability at TechInsights, Stephen provides expert insight into carbon footprint across the entire technology life cycle, from raw materials through product manufacturing, use, and end of life. Stephen also works on unique initiatives to characterize Scope 3 emissions in the use phase of consumer electronics products, with further-reaching implications for data center and automotive applications.

Stephen is internationally recognized for technical research contributions and collaborations. These include being awarded the 2018 best paper award for the IEEE Transactions on Power Electronics paper “High Temperature Electrical and Thermal Aging Performance and Application Considerations for SiC power DMOSFETs”. He led an exploratory research project in gallium oxide for power devices, presenting findings to the Royal Institution, London. While working in industry, he led the development of a new silicon IGBT product line and instigated a research and development project to use silicon carbide JFETs in circuit protection applications.

Also Read:

Kirin 9030 Hints at SMIC’s Possible Paths Toward >300 MTr/mm2 Without EUV

Cost, Cycle Time, and Carbon aware TCAD Development of new Technologies

5 Expectations for the Memory Markets in 2025


TSMC Technology Symposium 2026 Overview
by Daniel Nenni on 04-22-2026 at 12:00 pm

Semiconductor Revenue $1T Acceleration

Yes, it is that time of year again: the 2026 TSMC Technology Symposium kick-off event in Silicon Valley. TSMC has never been in a better position to forecast the future of semiconductor technology and the industry itself. TSMC closely collaborates with the top semiconductor companies around the world and the top players in the semiconductor ecosystem. Never in its history has TSMC held such a prominent position, and the information that comes from that position is astounding.

Dr. Kevin Zhang, Senior Vice President and Deputy Co-COO, again honored us with a press briefing before the event, which is what we are talking about today. Next week we will talk in more detail about the event itself and some of the announcements. Unlike most of the media, I will be there live with SemiWiki blogger Kalar Rajendiran.

Again, this is TSMC’s perspective on the semiconductor industry, but it is backed collectively by the entire semiconductor ecosystem, absolutely.

The semiconductor industry is entering a new phase of accelerated growth and architectural transformation, driven primarily by artificial intelligence (AI) and high-performance computing (HPC). Recent projections indicate that semiconductor market growth has significantly outpaced earlier expectations, rising from a forecasted 10% to an actual 23% annual increase, with future projections reaching approximately 45% growth. This rapid expansion is largely attributed to AI-driven demand, which is reshaping both technology development and system-level design.

Could this be the first year that the semiconductor industry outpaces TSMC? Hard to believe, but yes. I expect TSMC revenue to grow 30-40%. Here is the hitch: while wafer pricing is stable, chip pricing is not. The bulk of the 45% revenue growth is due to chip pricing rather than chip unit sales. Memory pricing is a big part of this, but some of the AI chips (NVIDIA) are also selling at a premium.

A major milestone in this transformation is the advancement of global semiconductor revenue toward the $1 trillion mark, now expected to be achieved earlier than previously projected. As illustrated in the industry trend chart, AI represents the latest inflection point following previous computing waves such as PCs, the internet, and smartphones. By 2030, the semiconductor market is expected to exceed $1.5 trillion, with HPC and AI contributing over 55% of total demand, far surpassing other segments like smartphones (20%), automotive (10%), and IoT (10%).

At the core of this growth is continuous innovation in semiconductor process technology. The roadmap for advanced nodes demonstrates a steady progression from nanometer-scale fabrication toward angstrom-class technologies. Nodes such as TSMC N2 and its enhanced derivative TSMC N2U focus on improving power, performance, and area (PPA) through design-technology co-optimization (DTCO). According to the technical data presented, N2U offers a 3–4% speed improvement at constant power, up to 10% power reduction at the same speed, and a modest increase in logic density. These incremental improvements are critical for maximizing return on investment for chip designers while maintaining compatibility with previous node designs.

Further advancements are seen in next-generation nodes such as A13, which extend technology leadership through optical shrink techniques. A 97% optical shrink enables approximately 6% area reduction while preserving backward compatibility in design rules. This allows designers to benefit from improved density without requiring extensive redesign, thereby accelerating product deployment.
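The shrink-to-area arithmetic behind that figure is simple: area scales with the square of the linear shrink factor, which is how a 97% optical shrink yields roughly a 6% area reduction.

```python
# A 97% optical (linear) shrink scales area by the square of the factor.
s = 0.97
area_ratio = s ** 2            # 0.9409
area_reduction = 1 - area_ratio
# area_reduction is 0.0591, i.e. approximately the 6% quoted above.
```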

While transistor scaling remains important, it is no longer sufficient to meet the exponential demands of AI workloads. Consequently, advanced packaging and system integration technologies have become central to performance scaling. Technologies such as CoWoS (Chip-on-Wafer-on-Substrate) and SoIC (System-on-Integrated-Chips) enable heterogeneous integration of logic and memory components. The HPC platform diagram illustrates how advanced logic dies, high-bandwidth memory (HBM), and photonic components are integrated into a single package to maximize compute density and efficiency.

The scaling of interposer size is a key enabler of this integration. Interposer capacity is expanding from 3.3 reticle sizes to over 14 reticles by 2029, supporting up to 24 HBM stacks. This expansion allows for massive increases in memory bandwidth and compute capability. Additionally, wafer-scale integration technologies such as System-on-Wafer (SoW) extend this concept further, enabling integration at scales exceeding 40 reticles, equivalent to 64 HBM stacks.

Three-dimensional stacking technologies also play a critical role in enhancing interconnect density and power efficiency. SoIC technology enables vertical integration with significantly higher interconnect density—up to 56× compared to traditional 2.5D approaches—and improved power efficiency. This shift from planar to vertical integration reflects a broader industry trend toward system-level optimization rather than purely transistor-level scaling.

The impact of these innovations is evident in system-level performance metrics. The number of compute transistors within a single CoWoS package is projected to increase by up to 48× between 2024 and 2029. Similarly, memory bandwidth is expected to scale by 34× during the same period, driven by advancements in HBM technology and integration techniques.

Another critical innovation is the adoption of co-packaged optics (CPO) for high-speed interconnects. Traditional electrical interconnects face limitations in power efficiency and latency. By integrating optical communication directly into the package, systems can achieve up to 10× improvements in power efficiency and 20× reductions in latency, as shown in a performance comparison chart. This transition from electrical to optical signaling is essential for scaling AI infrastructure, where massive data movement between compute units is required.

Beyond data centers, semiconductor advancements are also enabling the emergence of physical AI applications, particularly in automotive and robotics. Modern vehicles are evolving into compute-centric platforms with significantly increased silicon content, incorporating advanced processors, sensors, and connectivity modules. Looking forward, humanoid robots represent a convergence of digital AI and physical interaction, requiring integrated systems for sensing, computation, motion control, and power management.

Bottom line: The semiconductor industry is transitioning from traditional scaling paradigms to a holistic, system-level approach that integrates advanced process nodes, heterogeneous packaging, photonics, and AI-driven architectures. This convergence is enabling unprecedented growth in computational capability and will define the technological landscape of the next decade.

Also Read:

TSMC to Elon Musk: There are no Shortcuts in Building Fabs!

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation

Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete


TSMC to Elon Musk: There are no Shortcuts in Building Fabs!
by Daniel Nenni on 04-17-2026 at 10:00 am

Elon Musk Terafab 2026

The opening of the TSMC 2026 earnings call series brought no surprises. CC Wei has done more than 30 such calls since taking the CEO position in 2018, and he never disappoints. Once again, CC Wei reported numbers above guidance, driven by strong demand and flawless execution. This illustrates the benefit of TSMC’s close collaborations and deeply trusted relationships with partners and customers. The TSMC forecast is the most trusted forecast the semiconductor industry will ever see, absolutely.

I do remember the one time CC Wei did disappoint on an earnings call, and that was during COVID, which was a painful supply chain lesson for all. CC Wei turned that COVID supply chain experience into a masterclass on why supply chain trust and resilience are so important, which goes to the heart of the TSMC mission statement: trust.

“Our mission is to be the trusted technology and capacity provider of the global logic IC industry for years to come.”

As expected, TSMC N5 and N3 accounted for the majority of 2026 revenue, meaning that margins are also well above 60% and should stay that way for the foreseeable future. TSMC N3 is also fast approaching the 5-year depreciation mark, so TSMC corporate margins will only go up from here.

As we discussed before, TSMC N3 is the final node in the record-setting FinFET family of process technologies, and it has ZERO competition in the merchant foundry business. I remember tracking design wins when N3 first launched and realizing that TSMC N3 would be the most dominant process node I would ever see in my 40+ year semiconductor career, and that is certainly the case as it stands today.

CC Wei: In Taiwan, we are adding a new 3-nanometer fab to our GIGAFAB cluster in Tainan Science Park. Volume production is scheduled for the first half of 2027. In Arizona, our second fab will also utilize 3-nanometer technologies. Construction is already complete and volume production will begin in the second half of 2027. In Japan, we now plan to utilize 3-nanometer technology in our second fab and volume production is scheduled in 2028.

CC Wei also discussed moving more N5 capacity to N3. Samsung has reportedly fixed its yield problems at 5/4nm, so it makes complete sense for TSMC to focus on the higher-margin N3 process technologies. Besides, it is easier to move a TSMC N5 design to TSMC N3 than to Samsung 4nm, and much easier than moving a design to Samsung 3/2nm (GAA), so CC Wei’s strategy is clear and sound.

CC Wei: Next, let me talk about our N2 capacity expansion plan. Our practice is to prioritize the land in Taiwan to support the fast ramp of our newest node due to the need for tight integration with R&D operations. Today, our new node, N2, has already entered high-volume manufacturing in the fourth quarter of 2025 with good yield. N2 is ramping successfully in multiple phases at both Hsinchu and Kaohsiung sites, supported by strong demand from both smartphone and HPC/AI applications.

Regarding TSMC N2: TSMC’s N3 dominance not only sets up customers for a smooth transition to the N2 process family, it also brings forward the strongest ecosystem of partners the semiconductor industry has ever seen, which is a very big deal. There is little doubt that TSMC will dominate the 2nm process node. I’m just wondering how big the NOT TSMC market will be at 2nm. It was next to zero at 3nm due to the lack of competition. I hope 2nm will be different, with Intel Foundry 18AP and Samsung Foundry SF2 offering viable alternatives to TSMC N2, and maybe even Rapidus 2nm.

The call was closed out with a TSMC A14 status update. Will TSMC A14 again dominate the foundry business? Or better yet: how big will the NOT TSMC market be at 14 angstroms? It is too soon to tell, but my guess is that the NOT TSMC market will continue to grow due to supply chain concerns.

CC Wei: Finally, let me talk about our A14 status. Featuring our second-generation nanosheet transistor structure, A14 will deliver another full-node stride from N2, with performance and power benefits to address the insatiable need for high-performance and energy-efficient computing. Compared with N2, A14 will provide 10% to 15% speed improvement at the same power, or 25% to 30% power improvement at the same speed, and close to 20% chip density gain.

Our A14 technology development is on track and progressing well. We are observing a high level of customer interest and engagement from both smartphone and HPC applications. Volume production is scheduled for 2028. Our A14 technology and its derivatives will further extend our technology leadership position and enable TSMC to capture the growth opportunities well into the future.
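Taken at face value, those quoted ranges can be turned into a rough scaling estimate. The sketch below is illustrative back-of-envelope arithmetic only, assuming the midpoints of the quoted ranges and a same-size die; it is not based on measured silicon.

```python
# Back-of-envelope arithmetic on the quoted A14-vs-N2 figures.
# Midpoints of the quoted ranges are assumed (marketing-level numbers,
# not measured silicon data).
speed_gain = 0.125    # midpoint of "10% to 15% speed improvement at the same power"
power_gain = 0.275    # midpoint of "25% to 30% power improvement at the same speed"
density_gain = 0.20   # "close to 20% chip density gain"

# Iso-power, same die area: ~20% more logic, each gate ~12.5% faster.
throughput_per_die = (1 + density_gain) * (1 + speed_gain)

# Iso-speed: energy per operation falls by the power improvement.
energy_per_op = 1 - power_gain

print(f"Implied iso-power throughput per die: {throughput_per_die:.2f}x")
print(f"Implied iso-speed energy per operation: {energy_per_op:.3f}x")
```

By this crude reckoning, a same-size A14 die would deliver roughly 1.35× the throughput of an N2 counterpart at the same power, which is consistent with what "a full-node stride" is expected to mean.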

Of course there were references to Elon Musk and Terafab during the Q&A. CC Wei offered Elon Musk some very sound advice:

CC Wei: Again, let me say that it takes two to three years to build a new fab. No shortcuts. And it takes another one to two years to ramp it up. Again, that’s a fundamental of the foundry industry. And whether we try to win them back (Intel and Tesla), actually, they are still our customers and we are very confident in our technology position. And we work very hard to capture every piece of business possible.

Did you get that, Elon? No shortcuts in semiconductor manufacturing.

In regards to CapEx, TSMC raised guidance from $40-41B in 2025 to $52-56B in 2026, which is huge! When asked during the Q&A, CC Wei mentioned that TSMC would probably be at the high end of that range. In my opinion, TSMC will definitely be at the high end, and maybe even higher. It all depends on how well the NOT TSMC market is developing.
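For scale, the implied year-over-year growth can be computed directly from the guidance ranges quoted above; this is a quick illustrative calculation pairing the range endpoints, nothing more.

```python
# Quick arithmetic on the quoted CapEx guidance (USD billions).
capex_2025 = (40, 41)   # 2025 guidance range
capex_2026 = (52, 56)   # 2026 guidance range

# Most conservative pairing: low end of 2026 vs high end of 2025.
low_growth = capex_2026[0] / capex_2025[1] - 1
# Most aggressive pairing: high end of 2026 vs low end of 2025.
high_growth = capex_2026[1] / capex_2025[0] - 1

print(f"Implied YoY CapEx growth: {low_growth:.0%} to {high_growth:.0%}")
```

Even the conservative pairing implies roughly 27% growth; the aggressive pairing, 40%. Either way, "huge" is the right word.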

Also Read:

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation

Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete

TSMC Process Simplification for Advanced Nodes